AIP-76: Consume task-emitted partition keys on asset events #66782
Open
anishgirianish wants to merge 2 commits into
Conversation
Lee-W
reviewed
May 13, 2026
Force-pushed from 40b2e65 to 5d12baa (Compare)
Contributor
Author
Hi @Lee-W, thank you so much for the review. I have addressed the feedback in the latest push. Could you re-review whenever you get a chance? Thank you.
Lee-W
reviewed
May 14, 2026
Comment on lines +1486 to +1512
```python
payloads_by_asset: dict[SerializedAssetUniqueKey, list[OutletEventPayload]] = defaultdict(list)
for outlet_event in outlet_events:
    # Alias-emitted events are handled separately further down via
    # register_asset_change_for_alias, which uses the DagRun-level
    # partition_key. Per-emission partition keys do not fan out through
    # the alias path — emission via an alias produces one event per
    # resolved asset, all carrying the same dag_run_partition_key.
    if "source_alias_name" in outlet_event:
        continue
    asset_key = SerializedAssetUniqueKey(**outlet_event["dest_asset_key"])
    payloads_by_asset[asset_key].append(
        OutletEventPayload(
            extra=outlet_event["extra"], partition_key=outlet_event.get("partition_key")
        )
    )

# Back-fill DagRun.partition_key from the task emission when the task
# emitted exactly one distinct partition_key across all outlet events
# and the DagRun did not already have one set. This lets a task that
# discovers the partition at runtime (rather than via params) act as
# the source of truth for the DagRun-level key.
runtime_pks: set[str] = {
    payload.partition_key
    for payloads in payloads_by_asset.values()
    for payload in payloads
    if payload.partition_key is not None
}
```
Member
Suggested change (collect `runtime_pks` in the same loop instead of a second pass, and drop the explanatory comments):

```python
payloads_by_asset: dict[SerializedAssetUniqueKey, list[OutletEventPayload]] = defaultdict(list)
runtime_pks: set[str] = set()
for outlet_event in outlet_events:
    if "source_alias_name" in outlet_event:
        continue
    asset_key = SerializedAssetUniqueKey(**outlet_event["dest_asset_key"])
    partition_key = outlet_event.get("partition_key")
    payloads_by_asset[asset_key].append(
        OutletEventPayload(extra=outlet_event["extra"], partition_key=partition_key)
    )
    if partition_key is not None:
        runtime_pks.add(partition_key)
```
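The single-pass variant produces the same result as the two-pass original; a quick standalone check, using stand-in types and an assumed event shape (the `single_pass` name and sample events are illustrative, not from the PR):

```python
from collections import defaultdict


def single_pass(outlet_events: list[dict]) -> tuple[dict, set[str]]:
    """One-pass variant from the review suggestion: collect runtime
    partition keys while grouping, instead of a second sweep."""
    payloads_by_asset: dict[tuple, list[tuple]] = defaultdict(list)
    runtime_pks: set[str] = set()
    for event in outlet_events:
        if "source_alias_name" in event:
            continue  # alias events skipped, as in the original hunk
        key = tuple(sorted(event["dest_asset_key"].items()))
        pk = event.get("partition_key")
        payloads_by_asset[key].append((event.get("extra"), pk))
        if pk is not None:
            runtime_pks.add(pk)
    return payloads_by_asset, runtime_pks


events = [
    {"dest_asset_key": {"name": "a"}, "partition_key": "p1", "extra": {}},
    {"dest_asset_key": {"name": "a"}, "extra": {}},
    {"source_alias_name": "x", "dest_asset_key": {"name": "b"}},
]
by_asset, pks = single_pass(events)
print(pks)  # -> {'p1'}
```

Same grouping, same key set, one fewer traversal of the payload lists.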
Comment on lines +1559 to +1561

```python
partition_key=payload.partition_key
if payload.partition_key is not None
else dag_run_partition_key,
```
Member
why would it be None and fallback to the DagRun one?
Member
I kinda feel the user should be responsible for providing the partitions if they want to do a runtime one 🤔
WDYT? Or are there use cases where we need to fall back to the DagRun one?
Was generative AI tooling used to co-author this PR?
Tasks can record per-emission partition keys via outlet_events[asset].add_partitions(...) (shipped in #65447).
This PR persists each key on the matching AssetEvent row, fanning multi-key emissions out into one event per key,
and back-fills DagRun.partition_key when the task emitted exactly one distinct key on a run that had none.
closes: #58474
related: #44146 #65300
cc @Lee-W
A newsfragment named {pr_number}.significant.rst should be added in airflow-core/newsfragments. You can add this file in a follow-up commit after the PR is created, once the PR number is known.